WOLFE: Strength Reduction and Approximate Programming for Probabilistic Programming

Authors

  • Sebastian Riedel
  • Sameer Singh
  • Vivek Srikumar
  • Tim Rocktäschel
  • Larysa Visengeriyeva
  • Jan Nößner
Abstract

Existing modeling languages lack the expressiveness or efficiency to support many modern and successful machine learning (ML) models, such as structured prediction or matrix factorization. We present WOLFE, a probabilistic programming language that enables practitioners to develop such models. Most ML approaches can be formulated in terms of scalar objectives or scoring functions (such as distributions) and a small set of mathematical operations, such as maximization and summation. In WOLFE, the user works within a functional host language to declare scalar functions and invoke mathematical operators. The WOLFE compiler then replaces the operators with equivalent but more efficient (strength reduction) and/or approximate (approximate programming) versions to generate low-level inference or learning code. This approach can yield very concise programs, high expressiveness, and efficient execution.

Introduction

Existing probabilistic programming languages face a tradeoff between the expressivity of the models they can represent and the complexity of the modeling process for the user. Toward one end of the complexity spectrum, we have languages that are limited (by design, and often intentionally) in the types of machine learning models and approaches they support. For example, Church (Goodman et al. 2008) is a powerful tool for generative models, but discriminatively trained structured prediction models are difficult to realize in it. Languages such as Markov Logic Networks (Richardson and Domingos 2006) can train discriminative models, but fail to support paradigms such as matrix or tensor factorization. In contrast, libraries such as FACTORIE (McCallum, Schultz, and Singh 2009) can be used to create arbitrarily rich models, but the burden is often on the user to provide efficient algorithms and data structures to operate such models, requiring machine learning expertise.
There is a need for an extensible probabilistic programming language that can express a variety of models and algorithms while still retaining the simplicity and conciseness of existing paradigm-specific, declarative languages. In this paper, we introduce WOLFE,¹ a functional probabilistic programming language that allows users to define rich models in a concise way (akin to the mathematical expressions seen in machine learning papers), which are then compiled into efficient implementations, thus combining ease of use with expressiveness and efficient computation. WOLFE is based on a formulation that is common to many machine learning algorithms and models: real-valued/scalar functions (for example, corresponding to density functions, energy functions, discriminative scores, and training objectives) and mathematical operators that operate upon them (most prominently maximization, summation, and integration). The user constructs the relevant scalar functions in a functional host programming language,² and uses mathematical operators such as argmax to operate upon them, thus enabling a rich set of concisely defined ML models. The semantics of a WOLFE program is defined by the mathematical operators, and further, each WOLFE program is an executable program in the host language (albeit with brute-force default implementations of the operators that are often intractable). To ensure efficiency, the WOLFE compiler analyzes the user-provided scalar function definitions and generates optimized source code in the host language.

Copyright © 2014, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
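The "executable by default" idea above can be illustrated with a small sketch. This is a hypothetical example in the Scala host language, not WOLFE's actual API: `argmax` and `score` are illustrative names, and the brute-force `argmax` simply enumerates its domain, which is the kind of default implementation the compiler would later replace.

```scala
// Hypothetical sketch (not WOLFE's actual API): a scalar scoring
// function over a small search space, plus a brute-force argmax
// default that makes the program directly executable.
object BruteForceSketch {
  // Brute-force default: enumerate the domain and keep the best element.
  def argmax[T](domain: Seq[T])(score: T => Double): T =
    domain.maxBy(score)

  // A toy scalar function: prefer strings whose length is close to 5.
  def score(s: String): Double = -math.abs(s.length - 5)

  def main(args: Array[String]): Unit = {
    val domain = Seq("a", "hello", "wolfe!", "hi")
    println(argmax(domain)(score)) // prints "hello" (length exactly 5)
  }
}
```

The program runs as ordinary Scala, but the enumeration in `argmax` is exponential for structured domains, which is exactly why the compiler steps in.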
WOLFE performs two types of optimizations: strength reduction, which replaces brute-force operator implementations (such as exhaustive search) with equivalent but more efficient versions (such as Max-Product in a junction tree); and approximate programming, where operator calls are replaced by efficient approximations (for example, parallelized stochastic optimization). This separation of program semantics and implementation provides a number of crucial benefits: (1) the user's focus is on the mathematical formulation using the host language …

¹ http://www.wolfe.ml
² Currently, we use Scala (Odersky, Spoon, and Venners 2008).

Statistical Relational AI: Papers from the AAAI-14 Workshop
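The strength-reduction idea can be sketched concretely. In this hypothetical Scala example (illustrative names, not WOLFE's API), the objective decomposes as a sum of per-variable terms, so a joint brute-force argmax over the exponential product space can be replaced by independent per-variable maximizations, which is the kind of equivalence that Max-Product exploits when a model factorizes:

```scala
// Hypothetical illustration of strength reduction: when a score
// decomposes additively over variables, joint exhaustive search can be
// replaced by independent maximizations -- same result, far cheaper.
object StrengthReductionSketch {
  val labels = Seq(0, 1, 2)

  // Per-variable term: variable i prefers label i % 3.
  def local(i: Int, y: Int): Double = -math.abs(y - i % 3)

  // The full objective is a sum of local terms.
  def score(ys: Seq[Int]): Double =
    ys.zipWithIndex.map { case (y, i) => local(i, y) }.sum

  // Brute force: enumerate all |labels|^n assignments.
  def bruteForce(n: Int): Seq[Int] = {
    def assignments(k: Int): Seq[Seq[Int]] =
      if (k == 0) Seq(Seq.empty)
      else for (rest <- assignments(k - 1); y <- labels) yield y +: rest
    assignments(n).maxBy(score)
  }

  // Strength-reduced: maximize each term independently, O(n * |labels|).
  def reduced(n: Int): Seq[Int] =
    (0 until n).map(i => labels.maxBy(y => local(i, y)))
}
```

Both versions return the same optimal assignment; the reduced version avoids the exponential enumeration entirely.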



Publication date: 2014